Nonlinear Performative Prediction
Zhong, Guangzheng, Liu, Yang, Liu, Jiming
Performative prediction is an emerging paradigm in machine learning that addresses scenarios where a model's prediction may induce a shift in the distribution of the data it aims to predict. Existing works in this field often rely on uncontrollable assumptions, such as bounded gradients of the performative loss, and primarily focus on linear cases in their examples and evaluations in order to keep theoretical guarantees consistent with empirical validations. However, such linearity rarely holds in real-world applications, where data usually exhibit complex nonlinear characteristics. In this paper, we relax these uncontrollable assumptions and present a novel design that generalizes performative prediction to nonlinear cases while preserving the essential theoretical properties. Specifically, we formulate the loss function of performative prediction using a maximum-margin approach and extend it to nonlinear spaces through kernel methods. To quantify the data distribution shift, we employ the discrepancy between prediction errors on the two distributions as an indicator, which characterizes the impact of the performative effect on the specific learning task. This allows us to derive, for both the linear and nonlinear cases, the conditions for performative stability, a critical and desirable property in performative contexts. Building on these theoretical insights, we develop an algorithm that guarantees the performative stability of the predictive model. We validate the effectiveness of our method through experiments on synthetic and real-world datasets with both linear and nonlinear data distributions, demonstrating superior performance over state-of-the-art baselines.
- Asia > China > Hong Kong (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Portugal > Braga > Braga (0.04)
- (2 more...)
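The performative-stability property described in the abstract above is commonly pursued via repeated risk minimization: retrain on the distribution induced by the currently deployed model until the parameters stop moving. The sketch below illustrates that generic loop; the linear mean-shift model of the performative effect, the ridge objective, and all constants are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

# Hedged sketch of repeated risk minimization for performative prediction:
# deploy theta, sample from the distribution that deployment induces,
# retrain, and repeat until a (near-)fixed point is reached. Such a fixed
# point is performatively stable: the model is optimal on the very
# distribution its own deployment induces.

rng = np.random.default_rng(0)
d, n, eps, lam = 3, 2000, 0.3, 1.0
theta_true = np.array([1.0, -2.0, 0.5])

def sample_induced(theta):
    """Draw data from the distribution induced by deploying `theta`
    (assumed shift model: the feature mean moves by eps * theta)."""
    X = rng.normal(size=(n, d)) + eps * theta
    y = X @ theta_true + rng.normal(scale=0.1, size=n)
    return X, y

def ridge_fit(X, y):
    """Closed-form ridge regression: argmin ||X w - y||^2 + lam ||w||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

theta = np.zeros(d)
for t in range(30):
    X, y = sample_induced(theta)   # observe data shifted by deployment
    theta_next = ridge_fit(X, y)   # retrain on the induced distribution
    if np.linalg.norm(theta_next - theta) < 1e-4:
        break
    theta = theta_next
# `theta` now approximates a performatively stable point under this toy shift.
```

Whether this iteration converges at all is exactly what conditions for performative stability (as derived in the paper for both linear and kernelized losses) are meant to guarantee; with a strong performative effect the loop can oscillate instead.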
Machine Learning Full Course - Learn Machine Learning 10 Hours Machine Learning Tutorial Edureka
This Machine Learning Tutorial is ideal for both beginners and professionals who want to master Machine Learning Algorithms. Topics covered in this Machine Learning Tutorial for Beginners video include: 2:47 What is Machine Learning?
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.37)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.34)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Clustering (0.34)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Support Vector Machines (0.31)
Kernel Logistic Regression and the Import Vector Machine
The support vector machine (SVM) is known for its good performance in binary classification, but its extension to multi-class classification is still an ongoing research issue. In this paper, we propose a new approach for classification, called the import vector machine (IVM), which is built on kernel logistic regression (KLR). We show that the IVM not only performs as well as the SVM in binary classification, but also can naturally be generalized to the multi-class case. Furthermore, the IVM provides an estimate of the underlying probability. Similar to the "support points" of the SVM, the IVM model uses only a fraction of the training data to index kernel basis functions, typically a much smaller fraction than the SVM. This gives the IVM a computational advantage over the SVM, especially when the size of the training data set is large.
- North America > United States > Wisconsin > Dane County > Madison (0.15)
- North America > United States > California > Santa Clara County > Stanford (0.05)
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- North America > United States > New York (0.04)
- Research Report > New Finding (0.71)
- Research Report > Experimental Study (0.71)
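A key point of the abstract above is that kernel logistic regression, unlike the SVM, yields class-probability estimates. The following is a minimal KLR sketch under assumed choices (toy 2-D Gaussian blobs, an RBF kernel, and plain gradient descent on the penalized log-loss); the IVM itself additionally selects a small subset of "import points" to index the kernel basis, which this sketch omits.

```python
import numpy as np

# Minimal kernel logistic regression: the decision function is
# f(x) = sum_i alpha_i K(x, x_i), trained by gradient descent on the
# regularized negative log-likelihood; sigmoid(f(x)) is the probability
# estimate that a plain SVM does not provide.

rng = np.random.default_rng(1)

# Toy data: two Gaussian blobs with labels in {0, 1}
X = np.vstack([rng.normal(-1, 0.7, size=(40, 2)),
               rng.normal(+1, 0.7, size=(40, 2))])
y = np.array([0] * 40 + [1] * 40)

def rbf_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K = rbf_kernel(X, X)
alpha, lam, lr = np.zeros(len(X)), 1e-2, 0.1

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-K @ alpha))           # predicted P(y=1 | x)
    grad = K @ (p - y) / len(X) + lam * K @ alpha  # penalized log-loss gradient
    alpha -= lr * grad

probs = 1.0 / (1.0 + np.exp(-K @ alpha))  # probability estimates on the training set
acc = ((probs > 0.5) == y).mean()
```

The IVM's computational advantage comes from replacing the full expansion over all training points with a greedily chosen subset of kernel basis functions, keeping the same penalized-likelihood objective.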